
    Coin.AI: A Proof-of-Useful-Work Scheme for Blockchain-based Distributed Deep Learning

    One decade ago, Bitcoin was introduced, becoming the first cryptocurrency and establishing the concept of "blockchain" as a distributed ledger. As of today, there are many different implementations of cryptocurrencies working over a blockchain, with different approaches and philosophies. However, many of them share one common feature: they require proof-of-work to support the generation of blocks (mining) and, eventually, the generation of money. This proof-of-work scheme often consists of solving a cryptographic problem, most commonly finding a suitable hash value, which can only be achieved through brute force. The main drawback of proof-of-work is that it consumes enormous amounts of energy with no useful outcome beyond supporting the currency. In this paper, we present a theoretical proposal, named Coin.AI, that introduces a proof-of-useful-work scheme to support a cryptocurrency running over a blockchain. In this system, mining requires training deep learning models, and a block is only mined when the performance of the resulting model exceeds a threshold. The distributed system allows nodes to verify the models delivered by miners easily, far more efficiently than the mining process itself, determining when a block is to be generated. Additionally, the paper presents a proof-of-storage scheme for rewarding users who provide storage for the deep learning models, as well as a theoretical discussion of how the mechanics of the system could be articulated, with the ultimate goal of democratizing access to artificial intelligence.
    Comment: 17 pages, 5 figures
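    The verification asymmetry the abstract describes can be sketched as follows. This is an illustrative toy, not the paper's actual protocol: the names (`verify_block`, `accuracy`) and the `ACCURACY_THRESHOLD` value are assumptions.

```python
# Minimal sketch of the Coin.AI verification idea: mining means training a
# model, but verifying a block only requires one cheap evaluation pass over
# a held-out test set. All names and the threshold are illustrative.

ACCURACY_THRESHOLD = 0.90  # assumed network-wide consensus parameter

def accuracy(model, test_set):
    """Fraction of held-out samples the submitted model labels correctly."""
    correct = sum(1 for x, y in test_set if model(x) == y)
    return correct / len(test_set)

def verify_block(model, test_set, threshold=ACCURACY_THRESHOLD):
    """A block is accepted only if the miner's model beats the threshold."""
    return accuracy(model, test_set) >= threshold

# Toy task: classify an integer as even (True) or odd (False).
test_set = [(n, n % 2 == 0) for n in range(100)]
good_model = lambda n: n % 2 == 0  # a "well-trained" miner
bad_model = lambda n: True         # a lazy miner, 50% accuracy

print(verify_block(good_model, test_set))  # → True (block is mined)
print(verify_block(bad_model, test_set))   # → False (rejected)
```

    Evaluating the model once is far cheaper than training it, which is what makes distributed verification practical in such a scheme.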

    Evolutionary design of deep neural networks

    Mención Internacional en el título de doctor (International Mention in the doctoral degree).
    For three decades, neuroevolution has applied evolutionary computation to the optimization of the topology of artificial neural networks, with most works focusing on very simple architectures. However, times have changed, and nowadays convolutional neural networks are the industry and academic standard for solving a variety of problems, many of which remained unsolved before the advent of this kind of network. Convolutional neural networks involve complex topologies, and manually designing these topologies for the problem at hand is expensive and inefficient. In this thesis, our aim is to use neuroevolution to evolve the architecture of convolutional neural networks. To do so, we have tried two different techniques: genetic algorithms and grammatical evolution. We have implemented a niching scheme for preserving genetic diversity, in order to ease the construction of ensembles of neural networks. These techniques have been validated against the MNIST database for handwritten digit recognition, achieving a test error rate of 0.28%, and the OPPORTUNITY dataset for human activity recognition, attaining an F1 score of 0.9275. Both results are highly competitive with the state of the art, and in all cases ensembles have proven to perform better than individual models. Later, the topologies learned for MNIST were tested on EMNIST, a database introduced in 2017 that includes more samples and a set of letters for character recognition. Results have shown that the topologies optimized for MNIST perform well on EMNIST, proving that architectures can be reused across domains with similar characteristics. In summary, neuroevolution is an effective approach for automatically designing topologies for convolutional neural networks; however, it remains a largely unexplored field due to hardware limitations. Current advances should constitute the fuel that empowers the emergence of this field, and further research should start today.
    This Ph.D. dissertation has been partially supported by the Spanish Ministry of Education, Culture and Sports under FPU fellowship FPU13/03917. The research stay has been partially co-funded by the Spanish Ministry of Education, Culture and Sports under FPU short-stay grant EST15/00260.
    Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Committee: President: María Araceli Sanchís de Miguel; Secretary: Francisco Javier Segovia Pérez; Vocal: Simon Luca
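    As a rough illustration of the neuroevolution approach described above, the sketch below evolves layer-width genomes with a genetic algorithm. The fitness function is a synthetic stand-in (the real fitness would train and validate each network, which is the expensive part), and all names and parameters are illustrative assumptions.

```python
import random

# Toy neuroevolution sketch: each genome encodes a network topology as a
# list of layer widths; a GA with elitist selection evolves the population.
random.seed(0)

WIDTHS = [16, 32, 64, 128, 256]

def random_genome(max_layers=4):
    return [random.choice(WIDTHS) for _ in range(random.randint(1, max_layers))]

def fitness(genome):
    # Stand-in objective: prefer wide networks with about three layers.
    # Real code would return validation accuracy after training.
    return sum(genome) / (1 + abs(len(genome) - 3))

def mutate(genome):
    g = genome[:]
    g[random.randrange(len(g))] = random.choice(WIDTHS)
    return g

def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=30):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the best half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```

    Grammatical evolution, the thesis's second technique, would replace the direct list encoding with a grammar-driven genotype-to-topology mapping.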

    AWS PredSpot: Machine Learning for Predicting the Price of Spot Instances in AWS Cloud

    Elastic Compute Cloud (EC2) is one of the best-known services provided by Amazon for provisioning cloud computing resources, also known as instances. Besides the classical on-demand scheme, where users purchase compute capacity at a fixed cost, EC2 supports so-called spot instances, which are offered through a bidding scheme and can save users up to 90% of the cost of the equivalent on-demand instance. EC2 spot instances can be a useful alternative for attaining an important reduction in infrastructure cost, but designing bidding policies is difficult, since bidding below the market price will either prevent users from provisioning instances or cause them to lose those they already own. To this end, accurate forecasting of spot instance prices is of outstanding interest for designing working bidding policies. In this paper, we propose the use of different machine learning techniques to estimate the future price of EC2 spot instances, including linear, ridge and lasso regression, multilayer perceptrons, k-nearest neighbors, extra trees and random forests. The obtained performance varies significantly between instance types, with root mean squared errors ranging from values very close to zero up to values over 60 for some of the most expensive instances. Still, for most instances, forecasting performance is remarkably good, encouraging further research in this field of study.
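    One of the listed techniques, k-nearest neighbors, can be sketched as a price forecaster. The feature (hour of day) and the synthetic price history below are illustrative assumptions, not the paper's data or feature set.

```python
# Toy k-NN regression for spot price forecasting: predict a price as the
# mean price of the k historical records with the closest feature vectors.

def knn_predict(history, query, k=3):
    """history: list of (feature_vector, observed_price) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda rec: dist(rec[0], query))[:k]
    return sum(price for _, price in nearest) / k

# Synthetic history: price jumps in the evening. The single feature is
# (hour_of_day,); a real model would use much richer inputs.
history = [((h,), 0.10 + 0.02 * (h >= 18)) for h in range(24)]

print(round(knn_predict(history, (20,), k=3), 4))  # → 0.12
print(round(knn_predict(history, (5,), k=3), 4))   # → 0.1
```

    The same interface could wrap any of the other regressors the paper tests; only the `knn_predict` body would change.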

    Editor’s Note. Towards Blockchain Intelligence

    In this special issue, we want to gather innovative applications that are currently pushing forward research on Blockchain technologies. In particular, we are also interested in applications that put the focus on the data, enabling new processes that leverage relevant knowledge from that data. This special issue will be successful if readers gain a better understanding of how Blockchain can be applied to very diverse areas, and might even be interested in designing, implementing and deploying an innovative solution in a completely different field of knowledge. We hope this Special Issue provides readers with a better understanding of, and key insights into, how Blockchain and artificial intelligence are cross-fertilizing to revolutionize many aspects of our societies.

    Editor’s Note. Towards an Intelligent Society: Advances in Marketing and Neuroscience

    This Special Issue focuses on cases that explore the relationship between Artificial Intelligence and marketing, as well as neuroscience. AI can be combined with specific neuroscience techniques to achieve more successful and profitable neuromarketing. For this Special Issue, we have found that descriptions of successful use cases are highly valuable in helping researchers identify fields where novel applications of AI can enhance the outcomes of digital marketing and neuroscience.

    Circumstantial knowledge management for human-like interaction

    This project focuses on the circumstantial aspects of human-like interaction. Its purpose is, first, to design and develop a general-purpose situation model that would enhance any natural interaction system by providing knowledge on the interaction context, and second, to implement an intuitive management tool to edit the knowledge base of this situation model. The development of both the model and the editing tool has followed the usual processes of software engineering: requirements elicitation and specification, problem analysis, design of a solution, system implementation and validation. More specifically, a spiral lifecycle composed of three phases was followed, which is a convenient approach for research projects where not all system requirements are known from the very beginning. After the implementation was completed, an evaluation was carried out to observe the advantages of the editing tool over manual editing of the model's knowledge. The results of this evaluation showed that the tool provides a mechanism for model management that is significantly faster and more accurate than manual editing. Moreover, a subjective survey also revealed that the experiment subjects preferred the editing tool, as they considered it more comfortable, more intuitive, more reliable and more agile. The resulting general-purpose situation model and editing tool are a significant contribution to the state of the art, as previously existing situation models were ad-hoc models, i.e., models implemented to support a specific system, whose knowledge bases were edited manually. Finally, this work will be applied in a research project funded by the Spanish Ministry of Industry (CADOOH, TSI-020302-2011-21), and for this reason some of the future work outlined in this document will be carried out in the coming months.
    A copy of the Cognos toolkit, including the editing tool developed within this project, is available at the following site: http://labda.inf.uc3m.es/doku.php?id=es:labda_lineas:cognos.
    Ingeniería Informática

    Identifying Real Estate Opportunities using Machine Learning

    The real estate market is exposed to many price fluctuations because of existing correlations with many variables, some of which cannot be controlled or might even be unknown. Housing prices can increase rapidly (or, in some cases, drop very fast), yet the numerous online listings where houses are sold or rented are not likely to be updated that often. In some cases, individuals interested in selling a house (or apartment) might include it in an online listing and forget to update the price. In other cases, some individuals might deliberately set a price below the market price in order to sell the home faster, for various reasons. In this paper, we aim to develop a machine learning application that identifies opportunities in the real estate market in real time, i.e., houses listed at a price substantially below the market price. This application can be useful for investors interested in the housing market. We have focused on a use case considering real estate assets located in the Salamanca district of Madrid (Spain) and listed on the most relevant Spanish online site for home sales and rentals. The application is formally implemented as a regression problem that tries to estimate the market price of a house given features retrieved from public online listings. For building this application, we have performed a feature engineering stage in order to discover relevant features that allow for attaining high predictive performance. Several machine learning algorithms have been tested, including regression trees, k-nearest neighbors, support vector machines and neural networks, identifying the advantages and drawbacks of each.
    Comment: 24 pages, 13 figures, 5 tables
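    The core idea, flagging listings priced well below an estimated market price, might be sketched as follows. The median price-per-square-meter estimator and the 20% margin are illustrative assumptions, not the paper's actual model or threshold.

```python
# Toy opportunity detector: estimate a market price from comparable listings
# and flag a listing whose asking price falls substantially below it.

def market_estimate(area_m2, comparables):
    """Estimate market price from the median price per square meter of
    comparable listings, given as (area_m2, price) pairs."""
    ppm2 = sorted(price / area for area, price in comparables)
    median = ppm2[len(ppm2) // 2]
    return area_m2 * median

def is_opportunity(asking_price, area_m2, comparables, margin=0.20):
    """Flag the listing if it is priced at least `margin` below the estimate."""
    return asking_price < (1 - margin) * market_estimate(area_m2, comparables)

comparables = [(80, 640_000), (100, 790_000), (120, 970_000)]
print(is_opportunity(500_000, 90, comparables))  # → True (well below market)
print(is_opportunity(700_000, 90, comparables))  # → False (near market price)
```

    In the paper, the `market_estimate` step is the learned regression model (trees, k-NN, SVMs or neural networks) over engineered listing features rather than a single per-m2 statistic.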

    DataCare: Big Data Analytics Solution for Intelligent Healthcare Management

    This paper presents DataCare, a solution for intelligent healthcare management. This product is able not only to retrieve and aggregate data from different key performance indicators (KPIs) in healthcare centers, but also to estimate future values for these KPIs and, as a result, fire early alerts when undesirable values are about to occur, or provide recommendations to improve the quality of service. DataCare’s core processes are built on MongoDB, a free and open-source cross-platform document-oriented database, and Apache Spark, an open-source cluster-computing framework. This architecture ensures high scalability, capable of processing very high data volumes arriving at high speed from a large set of sources. This article describes the architecture designed for this project and the results obtained after conducting a pilot in a healthcare center. Useful conclusions have been drawn regarding how key performance indicators change in different situations and how they affect patients’ satisfaction.
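    The early-alert mechanism described above can be sketched with a simple one-step forecast. The linear extrapolation below stands in for DataCare's actual predictive models; the KPI, threshold and function names are illustrative assumptions.

```python
# Toy KPI early alert: fit a least-squares line to the recent KPI values,
# extrapolate one step ahead, and alert if the forecast crosses a threshold.

def forecast_next(series):
    """Least-squares linear fit over the series, extrapolated one step."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y + slope * (n - mean_x)

def early_alert(series, threshold):
    """Fire before the KPI actually crosses the undesirable threshold."""
    return forecast_next(series) > threshold

waiting_time = [20, 24, 27, 31, 35]  # minutes, trending upward
print(early_alert(waiting_time, 38))  # → True: alert fires one step early
```

    The production system would compute such forecasts continuously over KPI streams stored in MongoDB and processed with Spark.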

    Feature set optimization for physical activity recognition using genetic algorithms

    Proceeding of: Genetic and Evolutionary Computation Conference (GECCO 2015).
    Physical activity is recognized as one of the key factors for a healthy life due to its beneficial effects. The range of physical activities is very broad, and not all of them require the same effort to perform nor have the same effects on health. For this reason, automatically recognizing the physical activity performed by a user (or patient) is an interesting research field, mainly for two reasons: (1) it increases personal awareness of the activity being performed and its consequences on health, allowing users to receive proper credit (e.g., social recognition) for the effort; and (2) it allows doctors to perform continuous remote patient monitoring. This paper proposes a new approach for improving activity recognition by describing an activity recognition chain (ARC) that is optimized by means of genetic algorithms. This optimization process determines the most suitable and informative set of features, yielding higher recognition accuracy while reducing the total number of sensors required to track the user's activity. These improvements translate into lower hardware costs and less intrusive devices for patients. In this work, for assessing the proposed approach against other techniques and for replication purposes, a publicly available dataset on physical activity (PAMAP2) has been used. Experiments are designed and conducted to evaluate the proposed ARC using leave-one-subject-out cross-validation, and results are encouraging, reaching an average classification accuracy of about 94%.
    This research was partially funded by the European Union's CIP Programme (ICT-PSP-2012) under grant agreement no. 325146 (SEACW project).
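    GA-based feature subset selection, in the spirit of the optimized ARC, can be sketched as follows: genomes are binary masks over candidate features, and fitness trades a stand-in accuracy term against the number of features kept. The fitness function is synthetic (the paper's real fitness would train the activity classifier on the masked feature set), and all names and parameters are illustrative assumptions.

```python
import random

# Toy GA feature selection: evolve binary masks so that informative features
# are kept while the total feature/sensor count is penalized.
random.seed(1)

N_FEATURES = 12
USEFUL = {0, 3, 5, 7}  # pretend only these features carry signal

def fitness(mask):
    hits = sum(1 for i in USEFUL if mask[i])  # stand-in "accuracy" term
    cost = sum(mask)                          # penalize extra sensors
    return hits - 0.1 * cost

def evolve(pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # elitism: keep the best half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]      # one-point crossover
            child[random.randrange(N_FEATURES)] ^= 1  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([i for i, bit in enumerate(best) if bit])  # indices of kept features
```

    The cost term is what drives the sensor reduction the abstract mentions: two masks with the same accuracy are ranked by how few features they keep.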